Fix sector expiration #1721
Conversation
@@ -687,6 +688,16 @@ where
    for archived_segment in archiver.add_block(encoded_block, block_object_mappings) {
        let segment_header = archived_segment.segment_header;

        if let Err(error) =
Unrelated to this change (GitHub doesn't allow comments on unchanged lines): L612 above (block_number.checked_sub(&confirmation_depth_k.into())...) looks like an edge case where we won't archive blocks when the u32 wraps around. When this happens, there could be a slow path that just walks back the parent hashes to get to the block that is confirmation_depth_k old (see the sketch below).
Or we could opportunistically cache the block hashes as import notifications come in (this won't work on restart or reorg), then fall back to the slow path.
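A minimal sketch of such a slow path, assuming a hypothetical ChainView abstraction (the trait, names, and toy chain here are illustrative, not the actual client API):

```rust
use std::collections::HashMap;

/// Hypothetical chain-access abstraction; illustrative only, not the real client API.
trait ChainView {
    type Hash: Copy;
    /// Parent hash of the block with the given hash, if that block is known.
    fn parent_hash(&self, hash: Self::Hash) -> Option<Self::Hash>;
}

/// Slow path: walk `depth` parent links back from `start`, avoiding block-number
/// arithmetic entirely (and thus any wrap-around edge case).
fn walk_back<C: ChainView>(chain: &C, start: C::Hash, depth: u32) -> Option<C::Hash> {
    let mut current = start;
    for _ in 0..depth {
        current = chain.parent_hash(current)?;
    }
    Some(current)
}

/// Toy in-memory chain for demonstration: child hash -> parent hash.
struct ToyChain(HashMap<u64, u64>);

impl ChainView for ToyChain {
    type Hash = u64;
    fn parent_hash(&self, hash: u64) -> Option<u64> {
        self.0.get(&hash).copied()
    }
}

fn main() {
    // Chain 0 <- 1 <- 2 <- 3; walking 2 links back from 3 lands on 1.
    let chain = ToyChain(HashMap::from([(1, 0), (2, 1), (3, 2)]));
    assert_eq!(walk_back(&chain, 3, 2), Some(1));
    assert_eq!(walk_back(&chain, 0, 1), None); // genesis has no parent
}
```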
u32 will never wrap around; our block number type is u32.
I mean when it reaches u32::MAX and wraps back around.
Block number will never reach u32::MAX
Is it because mainnet would need to be running for a very long time (at 1 block per 6 sec) to get there? Nice problem to have, though :-)
Exactly. Even if blocks were produced every second, running out of u32 would not be an issue to worry about for quite a few decades. I hope Subspace lives long enough for this to be a real issue.
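A quick back-of-the-envelope check of that claim (figures are approximate):

```rust
fn main() {
    let seconds_per_year = 365.25 * 24.0 * 3600.0; // ≈ 31_557_600
    // At 1 block per second: u32::MAX blocks take roughly 136 years.
    println!("{:.0} years at 1 block/s", u32::MAX as f64 / seconds_per_year);
    // At 1 block per 6 seconds: roughly 817 years.
    println!("{:.0} years at 1 block/6s", u32::MAX as f64 * 6.0 / seconds_per_year);
}
```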
LGTM, thanks!
// +1 means we will start replotting a bit before it actually expires to avoid
// storing expired sectors
if *sector_expire_at
    <= (archived_segment_header.segment_index() + SegmentIndex::ONE)
Can this + 1 be made a method like inc()?
Possibly; do you think it would be more readable? Rust doesn't have increment operators, and I got used to that eventually.
Just preference, this is fine too
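For illustration, a minimal sketch of what such an inc() helper could look like on a SegmentIndex-style newtype (hypothetical; the real SegmentIndex in the Subspace codebase may differ):

```rust
/// Illustrative newtype standing in for the real SegmentIndex.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct SegmentIndex(u64);

impl SegmentIndex {
    /// Returns the next segment index; a named alternative to `+ SegmentIndex::ONE`.
    fn inc(self) -> Self {
        Self(self.0 + 1)
    }
}

fn main() {
    let current = SegmentIndex(41);
    assert_eq!(current.inc(), SegmentIndex(42));
    // The check under review could then read:
    // if *sector_expire_at <= archived_segment_header.segment_index().inc()
}
```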
#1647 implemented the expiration check incorrectly; because of that, as soon as a sector's expiration was known, consensus was rejecting the sector as expired 🤦‍♂️. While fixing this I also made the farmer start replotting one segment ahead of time, so that the farmer is much less likely to store already expired sectors.
Replotting itself worked fine, and sectors were successfully replotted, up until #1692, where, due to the ordering of operations, it was not possible to retrieve the segment header of the segment that the rest of the system was just notified about.
The last commit just adds some helpful extra logging.
We really need to write a decent number of tests around all this though 😞
Code contributor checklist: